Text-to-speech (TTS) research aims to develop models that can produce natural-sounding synthesized utterances, given a piece of text as input. Pushing the boundary of general naturalness, several state-of-the-art models such as Tacotron and DeepVoice3 achieve excellent results in improving the quality of synthesized speech. Aiming at more realistic speech synthesis, prosody-flexible TTS, also called expressive TTS, has recently become a topic of significant research.
In this work, we propose a prosody-transfer text-to-speech synthesis model. A token table and its combination weights are learned from the reference input to factorize the possible styles in an unsupervised manner. The results show that our model can successfully factorize the reference prosodies to represent the characteristics of different speakers and styles, learned without supervision from the training data.

Token representations in our model are an approach to factorizing the prosodies of the training dataset. During the test phase, if we clip the weights to a specific token, the synthesized result represents that token's learned prosody.
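The token mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the actual model: the dimensions, the random token table, and the stand-in reference weights are all assumptions made for the example. The key point is that "clipping" to one token amounts to a one-hot weighting over the token table.

```python
import numpy as np

# Illustrative sizes; the real model's dimensions are not specified here.
NUM_TOKENS, TOKEN_DIM = 10, 256

rng = np.random.default_rng(0)
# Stand-in for the learned token table (one embedding per token).
token_table = rng.standard_normal((NUM_TOKENS, TOKEN_DIM))

def style_embedding(logits, table):
    # Softmax over the token weights, then a weighted sum of token embeddings.
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return w @ table

# Training time: the weights come from the reference input (random stand-in here).
ref_logits = rng.standard_normal(NUM_TOKENS)
style = style_embedding(ref_logits, token_table)

# Test time: "clipping" to token 3 puts all weight on that token,
# so the style embedding is exactly that token's learned prosody.
clip_logits = np.full(NUM_TOKENS, -1e9)
clip_logits[3] = 0.0
clipped = style_embedding(clip_logits, token_table)
```

With the clipped weights, the combination collapses to a single row of the token table, which is why each token can be listened to in isolation.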
To validate the effectiveness of our model, we train the proposed model on the VCTK dataset while treating all data as coming from a single speaker. The most significant variation in the dataset is then speaker voice characteristics, so the tokens are expected to learn different speakers' voices. The test results match this expectation: results synthesized with the weights clipped to specific tokens are the voices of different speakers.
The audio samples can be found below:
To further test the model, we train it on an internal dataset. There is only one speaker in this dataset, so the tokens are expected to learn different prosody representations, prosody being the most significant style variation in the data.
The audio samples can be found below:
In another example, we train the proposed model on the Blizzard 2013 dataset, a single-speaker dataset containing audiobook recordings. The books range from novels to the Bible, fiction to narration, so the prosodies in the dataset vary significantly, making it ideal for style learning. Again, different tokens are expected to represent different prosodies.
The audio samples can be found below:
Utterance text content: My mother always took him to the town on a market day in a light gig.
Utterance text content: So we never saw Dick any more.
Utterance text content: You will be to visit me in prison with a basket of provisions, you will not refuse to visit me in prison?
The following shows three examples of unparallel prosody transfer synthesis.
In each example, the text of the utterance to be synthesized differs from the reference's.
The prosody of the unparallel reference utterance is transferred to synthesized results that have different text contents.
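The unparallel transfer setup can be sketched in a few lines. This is a hypothetical NumPy stand-in, not the actual model: `encode_text` and `encode_reference` are placeholder encoders, and the dimensions are made up for illustration. It shows only the conditioning structure: the decoder input pairs a text encoding with a prosody embedding taken from the reference audio, so the same reference prosody can drive synthesis of any text.

```python
import numpy as np

TEXT_DIM, STYLE_DIM = 128, 64  # illustrative sizes

def encode_text(text):
    # Placeholder text encoder: a deterministic embedding per string.
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).standard_normal(TEXT_DIM)

def encode_reference(ref_audio):
    # Placeholder reference encoder producing a prosody embedding.
    return np.random.default_rng(len(ref_audio)).standard_normal(STYLE_DIM)

def condition(text, ref_audio):
    # The decoder is conditioned on the text encoding concatenated with
    # the reference prosody embedding.
    return np.concatenate([encode_text(text), encode_reference(ref_audio)])

ref = np.zeros(16000)  # placeholder reference waveform
a = condition("So we never saw Dick any more.", ref)
b = condition("My mother always took him to the town.", ref)
```

Because `a` and `b` share the same reference, their prosody embeddings are identical while their text encodings differ, which is exactly the unparallel-transfer condition demonstrated in the samples above.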